2023-08-18 16:21:20 · AIbase
A Loophole Lets ChatGPT "Escape": Scrambled Prompts Enable the LLM to Rapidly Generate Ransomware, Leaving Jim Fan Stunned
Netizens abroad have discovered a new jailbreak technique that coaxes ChatGPT into generating ransomware using scrambled prompts. The trick exploits the same quirk that lets the human brain read words whose interior letters are shuffled (often called "typoglycemia"): the scrambled text slips past safety filters while remaining perfectly legible to the model. Jim Fan expressed amazement at how well GPT models comprehend scrambled words.
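The article does not reproduce the actual prompts, but the scrambling itself is the familiar typoglycemia transformation: keep each word's first and last letters and shuffle the interior. Below is a minimal, benign Python sketch of just that scrambling step (the function names and example sentence are our own illustration, not from the source):

```python
import random

def scramble_word(word: str, rng: random.Random) -> str:
    # Leave short or non-alphabetic tokens alone; shuffle only the
    # interior letters, keeping the first and last characters fixed
    # so the word stays readable (the "typoglycemia" effect).
    if len(word) <= 3 or not word.isalpha():
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text: str, seed: int = 0) -> str:
    # Seeded RNG so the scrambling is reproducible.
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())

if __name__ == "__main__":
    print(scramble_text("please summarize this research paper about language models"))
    # Output varies with the seed, e.g.:
    # "pslaee simaurmze this reascerh ppaer aobut lgaugane mldoes"
```

Text transformed this way is still easy for a human, and evidently for GPT models, to read, which is precisely why it can evade filters that match on intact keywords.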